In this paper, we establish efficient and uncoupled learning dynamics so that, when employed by all players in multiplayer perfect-recall imperfect-information extensive-form games, the trigger regret of each player grows as O(log T) after T repetitions of play. This improves exponentially over the prior best known trigger-regret bound of O(T^{1/4}), and settles a recent open question of Bai et al. (2022). As an immediate consequence, we guarantee convergence to the set of coarse correlated equilibria at the near-optimal rate of (log T)/T. Building on prior work, at the heart of our construction lies a more general result regarding fixed points derived from rational functions with polynomial degree, a property that we establish for the fixed points of (coarse) trigger deviation functions. Moreover, our construction leverages a refined regret circuit for the convex hull which, unlike prior guarantees, preserves the RVU property introduced by Syrgkanis et al. (NIPS 2015); this observation has independent interest for establishing near-optimal regret under learning dynamics based on CFR-type regret decompositions.
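As a reminder of the quantity being bounded, trigger regret is an instance of Phi-regret. A standard formulation (the notation here is ours, not taken verbatim from the paper) is:

```latex
% Phi-regret of player i after T rounds, for a set \Phi of deviation functions;
% trigger regret is the special case in which \Phi is the set of (coarse)
% trigger deviation functions over the player's strategy space.
\mathrm{Reg}_i^{\Phi}(T) \;=\; \max_{\phi \in \Phi} \sum_{t=1}^{T}
  \Bigl( u_i\bigl(\phi(x_i^t),\, x_{-i}^t\bigr) \;-\; u_i\bigl(x_i^t,\, x_{-i}^t\bigr) \Bigr).
```

Bounding this quantity by O(log T) for every player is what yields the (log T)/T convergence rate to the corresponding equilibrium set.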
Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, provable guarantees have so far been limited to fully competitive or cooperative scenarios, or impose strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon adversarial team Markov games, a natural and well-motivated class of games in which a team of identically interested players, in the absence of any explicit coordination or communication, competes against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step towards modeling more realistic strategic interactions that feature both competitive and cooperative interests. Our main contribution is the first algorithm for computing stationary epsilon-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as in 1/epsilon. The proposed algorithm is particularly natural and practical, and it is based on performing independent policy gradient steps for each player in the team, in tandem with best responses from the side of the adversary; in turn, the policy for the adversary is obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers. Along the way, we significantly extend an important characterization of optimal policies in adversarial (normal-form) team games due to Von Stengel and Koller (GEB '97).
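The algorithmic template described above, independent policy-gradient steps for the team players against a best-responding adversary, can be sketched in a one-shot matrix game; note that the paper's setting is infinite-horizon Markov games, and the toy game, step size, and horizon below are our own illustrative choices:

```python
def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based method).
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

# Team utility tensor U[a1][a2][b]; the adversary receives -U.
U = [[[1.0, 0.0], [0.0, 0.0]],
     [[0.0, 0.0], [0.0, 1.0]]]

x1, x2 = [0.5, 0.5], [0.5, 0.5]   # team players' mixed strategies
eta = 0.1                          # policy-gradient step size

for _ in range(200):
    # Adversary best-responds to the team's current joint strategy.
    payoff = [sum(x1[a1] * x2[a2] * U[a1][a2][b]
                  for a1 in range(2) for a2 in range(2)) for b in range(2)]
    b = min(range(2), key=lambda j: payoff[j])
    # Each team player takes an independent projected gradient step,
    # holding the other team player's strategy fixed.
    g1 = [sum(x2[a2] * U[a1][a2][b] for a2 in range(2)) for a1 in range(2)]
    g2 = [sum(x1[a1] * U[a1][a2][b] for a1 in range(2)) for a2 in range(2)]
    x1 = project_simplex([x1[i] + eta * g1[i] for i in range(2)])
    x2 = project_simplex([x2[i] + eta * g2[i] for i in range(2)])
```

The paper's actual adversary policy comes from a linear program rather than a pointwise best response; this sketch only illustrates the decoupled structure of the updates.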
A recent line of work has established uncoupled learning dynamics such that, when employed by all players in a game, each player's regret after T repetitions grows polylogarithmically in T, an exponential improvement over the traditional guarantees within the no-regret framework. However, so far these results have been limited to certain classes of games with structured strategy spaces, such as normal-form and extensive-form games. Whether O(polylog T) regret bounds can be obtained for general convex and compact strategy sets, which occur in many fundamental models in economics and multiagent systems, while retaining efficient strategy updates, is an important open question. In this paper, we answer this in the positive by establishing the first uncoupled learning algorithm with O(log T) per-player regret in general convex games, that is, games with concave utility functions supported on arbitrary convex and compact strategy sets. Our learning dynamics are based on an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space, using a self-concordant regularizer that is, peculiarly, not a barrier for the feasible region. Further, our learning dynamics are efficiently implementable given access to a proximal oracle for the convex strategy set, leading to O(log log T) per-iteration complexity; we also give extensions when access to only a linear optimization oracle is assumed. Finally, we adapt our dynamics to guarantee O(sqrt(T)) regret in the adversarial regime. Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret bounds, either in terms of the dependence on the number of iterations or on the dimension of the strategy sets.
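To make the optimistic FTRL template concrete, here is a minimal sketch over the box [0,1]^d. Purely for simplicity it uses the standard log-barrier of the box as the self-concordant regularizer, whereas the paper's regularizer operates on a lifted space and is deliberately not a barrier for the feasible region:

```python
def oftrl_step(cum_grad, pred_grad, eta):
    # x_{t+1} = argmin_x  eta * <cum_grad + pred_grad, x> + R(x)  over (0,1)^d,
    # with R(x) = -sum_i [log x_i + log(1 - x_i)].  The objective is separable
    # and strictly convex, so each coordinate is found by bisection on the
    # increasing derivative  c - 1/x + 1/(1 - x).
    out = []
    for c in (eta * (g + m) for g, m in zip(cum_grad, pred_grad)):
        lo, hi = 1e-12, 1.0 - 1e-12
        for _ in range(60):
            x = 0.5 * (lo + hi)
            if c - 1.0 / x + 1.0 / (1.0 - x) < 0.0:
                lo = x
            else:
                hi = x
        out.append(0.5 * (lo + hi))
    return out

# Optimistic loop: the prediction is the last observed cost gradient.
cum, last = [0.0, 0.0], [0.0, 0.0]
for g in ([1.0, -1.0], [1.0, -1.0], [1.0, -1.0]):   # toy cost gradients
    x = oftrl_step(cum, last, eta=0.5)
    cum = [c + gi for c, gi in zip(cum, g)]
    last = g
```

With no accumulated gradient the minimizer of the barrier alone is the center of the box; a persistent positive cost gradient pushes the corresponding coordinate toward 0.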
Recently, Daskalakis, Fishelson, and Golowich (DFG) (NeurIPS '21) showed that if all agents in a multi-player general-sum normal-form game employ Optimistic Multiplicative Weights Update (OMWU), the external regret of every player is O(polylog(T)) after T repetitions of the game. We extend their framework from external regret to internal regret and swap regret, thereby establishing uncoupled learning dynamics that converge to an approximate correlated equilibrium at the rate of O~(T^{-1}). This substantially improves over the prior best rate of convergence for correlated equilibria due to Chen and Peng (NeurIPS '20), and it is optimal within the no-regret framework, up to polylogarithmic factors. To obtain these results, we develop new techniques for establishing higher-order smoothness for learning dynamics involving fixed-point operations. Specifically, we establish that the no-internal-regret learning dynamics of Stoltz and Lugosi (Mach Learn '05) are equivalently simulated by no-external-regret dynamics on a combinatorial space. This allows us to trade the computation of the stationary distribution on a polynomial-sized Markov chain for a (much more well-behaved) linear transformation on an exponential-sized set, enabling us to leverage similar techniques as DFG to near-optimally bound the internal regret. Moreover, we establish an O(polylog(T)) no-swap-regret bound for the classic algorithm of Blum and Mansour (BM) (JMLR '07). We do so by introducing a technique based on the Cauchy integral formula that circumvents the more limited combinatorial arguments of DFG. In addition to shedding clarity on the near-optimal regret guarantees of BM, our arguments provide insights into the various ways in which the techniques of DFG can be extended and leveraged in the analysis of more involved learning algorithms.
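The BM construction referenced above can be sketched as follows: one external-regret learner per action (here plain multiplicative weights, with our own toy step size), combined each round through the stationary distribution of the row-stochastic matrix formed by their outputs:

```python
import math

def stationary(Q, iters=200):
    # Approximate the fixed point p = pQ of a row-stochastic matrix Q
    # by power iteration, starting from the uniform distribution.
    n = len(Q)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(p[a] * Q[a][b] for a in range(n)) for b in range(n)]
    return p

def blum_mansour(loss_fn, n_actions, T, eta=0.1):
    # One multiplicative-weights learner per action a; learner a is charged
    # the round's loss vector scaled by p[a], and the played distribution p
    # is the stationary distribution of the matrix of the learners' outputs.
    W = [[1.0] * n_actions for _ in range(n_actions)]
    p = [1.0 / n_actions] * n_actions
    for t in range(T):
        Q = [[w / sum(row) for w in row] for row in W]
        p = stationary(Q)
        loss = loss_fn(t, p)
        for a in range(n_actions):
            for b in range(n_actions):
                W[a][b] *= math.exp(-eta * p[a] * loss[b])
    return p

# Against a loss that always penalizes action 1, play concentrates on action 0.
p = blum_mansour(lambda t, p: [0.0, 1.0], n_actions=2, T=300)
```

The Markov-chain fixed point computed by `stationary` is exactly the operation whose smoothness the paper's analysis must control.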
In recent years, distributional reinforcement learning has produced many state-of-the-art results. Increasingly sample-efficient distributional algorithms for the discrete action domain have been developed over time, varying primarily in the way they parameterize their approximations of value distributions and in how they quantify the differences between those distributions. In this work, we transfer three of the most well-known and successful of those algorithms (QR-DQN, IQN and FQF) to the continuous action domain by extending two powerful actor-critic algorithms (TD3 and SAC) with distributional critics. We investigate whether the relative performance of the methods in the discrete action space translates to the continuous case. To that end, we compare them empirically on the PyBullet implementations of a set of continuous control tasks. Our results indicate qualitative invariance regarding the number and placement of distributional atoms in the deterministic, continuous action setting.
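The main axis of variation mentioned above, how value distributions are parameterized and compared, can be illustrated with the quantile-regression Huber loss used by QR-DQN-style critics. This is a simplified, batch-free sketch, not the exact training objective of any of the compared implementations:

```python
def quantile_huber_loss(theta, targets, kappa=1.0):
    # theta:   predicted quantile values at midpoint levels tau_i = (2i+1)/(2N).
    # targets: samples from the (bootstrapped) target return distribution.
    n = len(theta)
    taus = [(2 * i + 1) / (2.0 * n) for i in range(n)]
    total = 0.0
    for tau, th in zip(taus, theta):
        for z in targets:
            u = z - th
            # Huber transition at kappa keeps gradients bounded.
            huber = 0.5 * u * u if abs(u) <= kappa else kappa * (abs(u) - 0.5 * kappa)
            # Asymmetric weight: over- and under-estimates are penalized
            # according to the quantile level tau being regressed.
            total += abs(tau - (1.0 if u < 0 else 0.0)) * huber
    return total / (n * len(targets))
```

The loss vanishes when a single predicted quantile matches a point-mass target exactly, and grows with any mismatch, which is what drives the atoms toward the target distribution's quantiles.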
Data scarcity is one of the main issues with the end-to-end approach for Speech Translation, as compared to the cascaded one. Although most data resources for Speech Translation are originally document-level, they offer a sentence-level view, which can be directly used during training. This sentence-level view, however, is single and static, potentially limiting the utility of the data. Our proposed data augmentation method, SegAugment, challenges this idea and aims to increase data availability by providing multiple alternative sentence-level views of a dataset. Our method relies on an Audio Segmentation system to re-segment the speech of each document, after which we obtain the target text with alignment methods. The Audio Segmentation system can be parameterized with different length constraints, thus giving us access to multiple and diverse sentence-level views of each document. Experiments on MuST-C show consistent gains across 8 language pairs, with an average increase of 2.2 BLEU points, and up to 4.7 BLEU for lower-resource scenarios in mTEDx. Additionally, we find that SegAugment is also applicable to purely sentence-level data, as in CoVoST, and that it enables Speech Translation models to completely close the gap between the gold and automatic segmentation at inference time.
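A minimal sketch of the core idea of length-parameterized re-segmentation follows; the timestamps and constraints are illustrative, and the actual system uses a learned Audio Segmentation model plus alignment methods to recover the target text:

```python
def resegment(word_spans, min_len, max_len):
    # One pass of length-constrained re-segmentation over (start, end) word
    # spans: grow a segment until it reaches min_len, and force a boundary
    # before it would exceed max_len.  Different (min_len, max_len) settings
    # yield different sentence-level "views" of the same document.
    segs, cur = [], []
    for (s, e) in word_spans:
        if cur and (e - cur[0][0] > max_len):
            segs.append((cur[0][0], cur[-1][1]))
            cur = []
        cur.append((s, e))
        if e - cur[0][0] >= min_len:
            segs.append((cur[0][0], cur[-1][1]))
            cur = []
    if cur:
        segs.append((cur[0][0], cur[-1][1]))
    return segs

# Ten one-second words; two length settings give two distinct views.
spans = [(i, i + 1) for i in range(10)]
view_a = resegment(spans, min_len=3, max_len=5)
view_b = resegment(spans, min_len=2, max_len=5)
```

Each view covers the same audio with different segment boundaries, which is exactly what multiplies the sentence-level training pairs.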
The cyber-physical convergence is opening up new business opportunities for industrial operators. The need for deep integration of the cyber and the physical worlds establishes a rich business agenda towards consolidating new system and network engineering approaches. This revolution would not be possible without rich and heterogeneous sources of data, as well as the ability to exploit them intelligently, mainly because data will serve as a fundamental resource to promote Industry 4.0. One of the most fruitful research and practice areas emerging from this data-rich, cyber-physical, smart factory environment is the data-driven process monitoring field, which applies machine learning methodologies to enable predictive maintenance applications. In this paper, we examine popular time series forecasting techniques as well as supervised machine learning algorithms in the applied context of Industry 4.0, by transforming and preprocessing the historical industrial dataset of a packing machine's operational state recordings (real data coming from the production line of a manufacturing plant in the food and beverage domain). In our methodology, we use only a single signal concerning the machine's operational status to make our predictions, without considering other operational variables or fault and warning signals, hence its characterization as ``agnostic''. In this respect, the results demonstrate that the adopted methods achieve quite promising performance on three targeted use cases.
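The "agnostic" setup, predicting the machine's next operational state from lagged values of the single status signal alone, can be sketched as follows; the window length and the frequency-table predictor are our illustrative choices, not the paper's actual models:

```python
from collections import Counter, defaultdict

def make_windows(signal, w):
    # Turn the single status signal into supervised pairs: w lags -> next value.
    X = [signal[i:i + w] for i in range(len(signal) - w)]
    y = [signal[i + w] for i in range(len(signal) - w)]
    return X, y

def fit_predict(X, y, query):
    # Frequency-table baseline: predict the most common successor of a pattern,
    # or None if the pattern was never observed in training.
    table = defaultdict(Counter)
    for xi, yi in zip(X, y):
        table[tuple(xi)][yi] += 1
    counts = table.get(tuple(query))
    return counts.most_common(1)[0][0] if counts else None

signal = [0, 1] * 10          # toy periodic on/off operational-state recording
X, y = make_windows(signal, w=2)
```

Any supervised learner can be dropped in place of `fit_predict`; the essential point is that the feature matrix is built from the status signal only.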
The integration of automated machine learning-based systems into a wide range of tasks has expanded as a result of their performance and speed. Although there are numerous advantages to employing ML-based systems, if they are not interpretable they should not be used in critical, high-risk applications where human lives are at stake. To address this issue, researchers and businesses have been focusing on finding ways to improve the interpretability of complex ML systems, and several such methods have been developed. Indeed, there are so many developed techniques that it is difficult for practitioners to choose the best among them for their applications, even when using evaluation metrics. As a result, the demand for a selection tool, a meta-explanation technique based on a high-quality evaluation metric, is apparent. In this paper, we present a local meta-explanation technique which builds on top of the truthfulness metric, which is a faithfulness-based metric. We demonstrate the effectiveness of both the technique and the metric by concretely defining all the concepts and through experimentation.
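To make the selection idea concrete, here is a toy sketch in which "truthfulness" is scored as sign-agreement between an attribution and the model's response to single-feature perturbations; this scoring rule and the function names are our simplification of a faithfulness-based metric, not the paper's exact definition:

```python
def truthfulness(f, x, attribution, eps=0.5):
    # Fraction of features whose attribution sign agrees with the observed
    # change in f(x) when that feature is perturbed upward by eps.
    base = f(x)
    agree = 0
    for i, a in enumerate(attribution):
        xp = list(x)
        xp[i] += eps
        delta = f(xp) - base
        if (a >= 0) == (delta >= 0):
            agree += 1
    return agree / len(attribution)

def meta_explain(f, x, candidates):
    # Meta-explanation as selection: return the name of the candidate
    # explanation with the highest truthfulness score on this instance.
    return max(candidates, key=lambda name: truthfulness(f, x, candidates[name]))
```

For a linear model `f = lambda v: 2*v[0] - 3*v[1]`, an attribution matching the coefficient signs scores 1.0 and is selected over one with flipped signs.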
In this paper, we address the problem of image splicing localization with a multi-stream network architecture that processes the raw RGB image in parallel with other handcrafted forensic signals. Unlike previous methods that either use only the RGB images or stack several signals in a channel-wise manner, we propose an encoder-decoder architecture that consists of multiple encoder streams. Each stream is fed with either the tampered image or handcrafted signals and processes them separately to capture relevant information from each one independently. Finally, the extracted features from the multiple streams are fused in the bottleneck of the architecture and propagated to the decoder network that generates the output localization map. We experiment with two handcrafted algorithms, i.e., DCT and Splicebuster. Our proposed approach is benchmarked on three public forensics datasets, demonstrating competitive performance against several competing methods and achieving state-of-the-art results, e.g., 0.898 AUC on CASIA.
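The structural contrast with channel-wise stacking can be sketched abstractly: each stream encodes its own signal independently, and features meet only at the bottleneck. The toy encoders and concatenation-style fusion below are illustrative stand-ins; the actual model uses convolutional encoder streams and a learned decoder:

```python
def multi_stream_forward(streams, inputs, decoder):
    # Each encoder stream sees exactly one signal (e.g. the RGB image, DCT
    # features, or a Splicebuster-style noise residual) and runs independently;
    # the per-stream features are fused at the bottleneck by concatenation and
    # only then passed to the shared decoder.
    bottleneck = []
    for encode, x in zip(streams, inputs):
        bottleneck.extend(encode(x))
    return decoder(bottleneck)

# Toy stand-ins: two "encoders" and an identity "decoder".
streams = [lambda x: [sum(x)], lambda x: [max(x)]]
fused = multi_stream_forward(streams, [[1, 2, 3], [1, 3, 2]], lambda f: f)
```

The design choice this illustrates is that no stream can overwrite another's evidence before the bottleneck, unlike stacking all signals into one input tensor.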
The sheer volume of online user-generated content has rendered content moderation technologies essential in order to protect digital platform audiences from content that may cause anxiety, worry, or concern. Despite the efforts towards developing automated solutions to tackle this problem, creating accurate models remains challenging due to the lack of adequate task-specific training data. The latter limitation is directly related to the fact that manually annotating such data is a highly demanding procedure that can severely affect the annotators' emotional well-being. In this paper, we propose the CM-Refinery framework that leverages large-scale multimedia datasets to automatically extend initial training datasets with hard examples that can refine content moderation models, while significantly reducing the involvement of human annotators. We apply our method on two model adaptation strategies designed with respect to the different challenges observed while collecting data, i.e. lack of (i) task-specific negative data or (ii) both positive and negative data. Additionally, we introduce a diversity criterion applied to the data collection process that further enhances the generalization performance of the refined models. The proposed method is evaluated on the Not Safe for Work (NSFW) and disturbing content detection tasks on benchmark datasets, achieving 1.32% and 1.94% accuracy improvements compared to the state of the art, respectively. Finally, it significantly reduces human involvement, as 92.54% of data are automatically annotated in the case of disturbing content, while no human intervention is required for the NSFW task.
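The two selection ideas, mining hard examples and enforcing diversity, can be sketched together; the uncertainty band and the Euclidean distance threshold below are our illustrative stand-ins for the framework's actual criteria:

```python
def select_hard_diverse(items, scores, features, band=(0.4, 0.6), min_dist=1.0):
    # Hard examples: the model's confidence score falls inside an uncertainty
    # band around the decision threshold (0.5 here).
    # Diversity: greedily keep an item only if its feature vector is far
    # from every item already kept.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    kept, kept_feats = [], []
    # Visit the most uncertain (hardest) candidates first.
    for item, s, f in sorted(zip(items, scores, features),
                             key=lambda t: abs(t[1] - 0.5)):
        if not (band[0] <= s <= band[1]):
            continue
        if all(dist(f, kf) >= min_dist for kf in kept_feats):
            kept.append(item)
            kept_feats.append(f)
    return kept
```

Confident predictions are dropped as uninformative, and near-duplicates of already-selected hard examples are filtered out, which is what limits the annotation burden.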